Improved magnetic loss for TLM

Authors

Abstract


Similar articles

Loss Functions For Improved On-Policy Control

We introduce and empirically evaluate two novel online gradient-based reinforcement learning algorithms with function approximation, one model-based and the other model-free. These algorithms allow for non-squared loss functions, which is novel in reinforcement learning and appears to offer empirical advantages. We further extend a previous gradient-based algorithm ...


Improved approximations for the Erlang loss model

Stochastic loss networks are often a very effective model for studying the random dynamics of systems requiring simultaneous resource possession. Given a stochastic network and a multi-class customer workload, the classical Erlang model renders the stationary probability that a customer will be lost due to insufficient capacity for at least one required resource type. Recently a novel family of...
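The classical Erlang loss (Erlang B) probability mentioned above can be computed with the standard numerically stable recurrence B(E, 0) = 1, B(E, k) = E·B(E, k−1) / (k + E·B(E, k−1)). A minimal sketch (the function name `erlang_b` is illustrative, not from the article):

```python
def erlang_b(erlangs: float, servers: int) -> float:
    """Blocking probability of the classical Erlang loss model
    (Erlang B), via the standard numerically stable recurrence."""
    b = 1.0  # B(E, 0) = 1: with no servers, every customer is lost
    for k in range(1, servers + 1):
        b = (erlangs * b) / (k + erlangs * b)
    return b

# Example: an offered load of 2 Erlangs on 2 servers blocks 40% of customers.
print(round(erlang_b(2.0, 2), 6))  # → 0.4
```

The recurrence avoids the factorials and large powers of the closed-form expression, so it stays well-conditioned even for hundreds of servers.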


TLM-2.0 in SystemVerilog

Transaction-level modeling (TLM) is a methodology for building models at high levels of abstraction, those above RTL. TLM-2.0 is a library of classes that implement a methodology for building transaction-level models in SystemC and connecting them together. It was developed by OSCI and released in 2009 and is now on its way to becoming an IEEE standard as part of IEEE 1666-2011. In ...


TLM Generation with ESE

Without a well-structured system design methodology, it is no longer feasible to cope with the dramatically increasing complexity of modern embedded system designs. Platform methodology, which often combines hardware/software co-design with virtual prototyping, is the most popular solution [3]. Although it has been helpful in reducing engineering costs and the length of the design cycle due to th...


Improved Loss Bounds For Multiple Kernel Learning

We propose two new generalization error bounds for multiple kernel learning (MKL). First, using the bound of Srebro and Ben-David (2006) as a starting point, we derive a new version which uses a simple counting argument for the choice of kernels in order to generate a tighter bound when 1-norm regularization (sparsity) is imposed in the kernel learning problem. The second bound is a Rademacher c...



Journal

Journal title: Electronics Letters

Year: 1993

ISSN: 0013-5194

DOI: 10.1049/el:19930312